
    Responsible Autonomy

    As intelligent systems increasingly make decisions that directly affect society, perhaps the most important upcoming research direction in AI is to rethink the ethical implications of their actions. Means are needed to integrate moral, societal and legal values with technological developments in AI, both during the design process and as part of the deliberation algorithms employed by these systems. In this paper, we describe leading ethics theories and propose alternative ways to ensure ethical behavior by artificial systems. Given that ethics are dependent on the socio-cultural context and are often only implicit in deliberation processes, methodologies are needed to elicit the values held by designers and stakeholders, and to make these explicit, leading to better understanding of and trust in artificial autonomous systems. Comment: IJCAI 2017 (International Joint Conference on Artificial Intelligence).

    Querying Social Practices in Hospital Context

    Understanding the social contexts in which actions and interactions take place is of utmost importance for planning one’s goals and activities. People use social practices as a means to make sense of their environment, assessing how that context relates to past, common experiences, culture and capabilities. Social practices can therefore simplify deliberation and planning in complex contexts. In the context of patient-centered planning, hospitals seek means to ensure that patients and their families are at the center of decisions and planning of healthcare processes. This requires, on the one hand, that patients are aware of the practices in place at the hospital and, on the other hand, that hospitals have the means to evaluate and adapt current practices to the needs of their patients. In this paper we apply a framework for formalizing the social practices of an organization to an emergency department that carries out patient-centered planning. We indicate how such a formalization can be used to answer operational queries about the expected outcome of operational actions.
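    The formalization itself is not reproduced in this abstract; as a rough illustration only, the sketch below encodes a social practice as roles, activities and expected outcomes, and answers two simple operational queries over it. The class, field and example names (SocialPractice, performer_of, expected_outcome, the triage example) are hypothetical assumptions, not the paper's framework.

```python
# A minimal, hypothetical encoding of a social practice plus two operational
# queries over it. Names are illustrative assumptions, not the paper's
# formalization.
from dataclasses import dataclass


@dataclass
class SocialPractice:
    name: str
    roles: set          # who is expected to take part
    activities: dict    # activity -> role expected to perform it
    outcomes: dict      # activity -> outcome participants can expect

    def performer_of(self, activity):
        """Which role is expected to carry out this activity?"""
        return self.activities.get(activity)

    def expected_outcome(self, activity):
        """What outcome does the practice lead participants to expect?"""
        return self.outcomes.get(activity)


# A toy emergency-department practice for patient-centered planning.
triage = SocialPractice(
    name="triage",
    roles={"patient", "triage_nurse", "physician"},
    activities={"initial_assessment": "triage_nurse",
                "treatment_decision": "physician"},
    outcomes={"initial_assessment": "urgency level assigned",
              "treatment_decision": "care plan agreed with the patient"},
)

print(triage.performer_of("initial_assessment"))      # triage_nurse
print(triage.expected_outcome("initial_assessment"))  # urgency level assigned
```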

    Group Norms for Multi-Agent Organisations

    W. W. Vasconcelos acknowledges the support of the Engineering and Physical Sciences Research Council (EPSRC-UK) within the research project “Scrutable Autonomous Systems” (Grant No. EP/J012084/1). The authors thank the three anonymous reviewers for their comments, suggestions, and constructive criticisms. Thanks are due to Dr. Nir Oren, for comments on earlier versions of the article, and Mr. Seumas Simpson, for proofreading the manuscript. Any remaining mistakes are the sole responsibility of the authors. Peer reviewed. Postprint.

    A Framework for Organization-Aware Agents


    Using intentional analysis to model knowledge management requirements in communities of practice

    This working document presents a fictitious Knowledge Management (KM) scenario to be modeled using intentional analysis, in order to guide the choice of appropriate information system support for the given situation. In this scenario, a newcomer in a knowledge organization decides to join an existing Community of Practice (CoP) in order to share knowledge and adjust to his new working environment. The preliminary idea is to use Tropos for the intentional analysis, allowing us to elicit the requirements for a KM system, followed by the use of the Agent-Object-Relationship Modeling Language (AORML) in the architectural and detailed design phases of software development. Aside from this primary goal, we also intend to point out where the expressiveness of the intentional analysis modeling language we are using needs to be extended, and to check where the methodology could be improved to make it more usable. This is the first version of this working document, which we aim to update regularly with new findings as the analysis progresses.
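    As a rough illustration of the kind of actor/goal/dependency structure that a Tropos-style intentional analysis elicits, the sketch below encodes a few dependencies from a scenario like this one and derives a requirement for the KM system from them. The actor and goal names are invented for this example and do not come from the working document.

```python
# Toy encoding of an i*/Tropos-style intentional model as dependencies
# between actors. All names are invented for illustration.
from dataclasses import dataclass


@dataclass(frozen=True)
class Dependency:
    depender: str   # actor who needs something
    dependee: str   # actor expected to provide it
    goal: str       # goal or resource depended upon


model = [
    Dependency("newcomer", "community_of_practice", "obtain domain knowledge"),
    Dependency("community_of_practice", "newcomer", "share lessons learned"),
    Dependency("community_of_practice", "km_system", "store and retrieve documents"),
]

# A requirements-oriented query: what must the KM system support?
km_requirements = [d.goal for d in model if d.dependee == "km_system"]
print(km_requirements)  # ['store and retrieve documents']
```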

    Modelling Human Routines: Conceptualising Social Practice Theory for Agent-Based Simulation

    Our routines play an important role in a wide range of social challenges such as climate change, disease outbreaks and coordinating staff and patients in a hospital. To use agent-based simulations (ABS) to understand the role of routines in social challenges, we need an agent framework that integrates routines. This paper provides the domain-independent Social Practice Agent (SoPrA) framework, which satisfies requirements from the literature for simulating routines. By choosing the appropriate concepts from the literature on agent theory, social psychology and social practice theory, we ensure that SoPrA correctly depicts current evidence on routines. By creating a consistent, modular and parsimonious framework suitable for multiple domains, we enhance the usability of SoPrA. SoPrA provides ABS researchers with a conceptual, formal and computational framework to simulate routines and gain new insights into social systems.
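    SoPrA's actual formalization is not given in this abstract; purely as an illustration of the idea of routine-driven action selection, the sketch below shows an agent that follows a habitual activity with a probability given by its habit strength and otherwise falls back to explicit deliberation. The class name, the habit-strength rule and the example routine are assumptions, not the SoPrA specification.

```python
# Illustrative routine-following agent. The habit-strength rule and all
# names are assumptions, not the SoPrA framework itself.
import random


class RoutineAgent:
    def __init__(self, routines):
        # routines: context -> (habitual activity, habit strength in [0, 1])
        self.routines = routines

    def act(self, context, deliberate):
        """Follow the routine for this context with probability equal to its
        habit strength; otherwise fall back to explicit deliberation."""
        if context in self.routines:
            activity, strength = self.routines[context]
            if random.random() < strength:
                return activity
        return deliberate(context)


agent = RoutineAgent({"weekday_morning": ("cycle_to_work", 0.9)})
choice = agent.act("weekday_morning", deliberate=lambda ctx: "take_the_bus")
print(choice)  # usually "cycle_to_work", occasionally the deliberated fallback
```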